
Conversation

@slp (Collaborator) commented Nov 12, 2025

Imago supports both operations, so let's just enable them, as they may have a significant impact on both performance and disk usage on the host.

@slp (Collaborator, Author) commented Nov 12, 2025

Fixes #360

@slp (Collaborator, Author) commented Nov 12, 2025

This one needs an update of the imago crate.

slp added 5 commits November 27, 2025 15:55
We need the latest version of the imago crate to support F_DISCARD
and F_WRITE_ZEROES.

Signed-off-by: Sergio Lopez <[email protected]>
Extend VirtioBlkConfig enough to cover up to write_zeroes_may_unmap, as
we're going to need those fields to implement support for both
F_DISCARD and F_WRITE_ZEROES.

Signed-off-by: Sergio Lopez <[email protected]>
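In rough terms, the extended config space could look like the sketch below. The field layout follows the virtio-blk spec; the struct shown here is illustrative and may differ from the actual libkrun definition.

```rust
// Illustrative sketch of a virtio-blk config space extended up to
// write_zeroes_may_unmap. Field order follows the virtio-blk spec;
// this is not the actual libkrun definition.
#[repr(C, packed)]
struct VirtioBlkConfig {
    capacity: u64,
    size_max: u32,
    seg_max: u32,
    // geometry: cylinders (u16), heads (u8), sectors (u8), flattened for brevity
    geometry: [u8; 4],
    blk_size: u32,
    // topology
    physical_block_exp: u8,
    alignment_offset: u8,
    min_io_size: u16,
    opt_io_size: u32,
    writeback: u8,
    unused0: u8,
    num_queues: u16,
    // Required for VIRTIO_BLK_F_DISCARD.
    max_discard_sectors: u32,
    max_discard_seg: u32,
    discard_sector_alignment: u32,
    // Required for VIRTIO_BLK_F_WRITE_ZEROES.
    max_write_zeroes_sectors: u32,
    max_write_zeroes_seg: u32,
    write_zeroes_may_unmap: u8,
}

fn main() {
    println!("config size: {} bytes", std::mem::size_of::<VirtioBlkConfig>());
}
```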
Imago's discard_to_* operations require a mutable reference to Storage.
Let's wrap disk_image in a Mutex so we can obtain such a reference when
needed. Since, in practice, we won't have multiple threads accessing the
lock, the overhead should be very small (one or two operations on
atomics).

Signed-off-by: Sergio Lopez <[email protected]>
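As an illustration of this approach, here is a minimal, self-contained sketch of the Mutex wrapping; `DiskImage` is a hypothetical stand-in for the imago-backed image, not the real type.

```rust
use std::sync::{Arc, Mutex};

// Hypothetical stand-in for the imago-backed disk image.
struct DiskImage;

impl DiskImage {
    // Discard-style operations need `&mut self`, mirroring the constraint
    // described above.
    fn discard(&mut self, _offset: u64, _len: u64) {}
}

struct BlockDevice {
    // Wrapping the image in Arc<Mutex<..>> lets the worker take a mutable
    // reference on demand. With a single, uncontended user, locking boils
    // down to a couple of atomic operations.
    disk_image: Arc<Mutex<DiskImage>>,
}

impl BlockDevice {
    fn handle_discard(&self, offset: u64, len: u64) {
        let mut image = self.disk_image.lock().unwrap();
        image.discard(offset, len);
    }
}

fn main() {
    let dev = BlockDevice {
        disk_image: Arc::new(Mutex::new(DiskImage)),
    };
    dev.handle_discard(0, 4096);
}
```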
The F_DISCARD feature enables the guest to request us to free up sectors
on the backing storage. Since imago does most of the heavy lifting, we
just need to announce the feature and call imago's "discard_to_any"
when needed.

Signed-off-by: Sergio Lopez <[email protected]>
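As a rough sketch, handling a discard request boils down to translating the guest-provided segments into byte ranges and forwarding them to the image. The segment layout (sector, num_sectors, flags) comes from the virtio-blk spec; `discard_range` is a hypothetical placeholder for the call into imago's `discard_to_any`, whose exact signature is not shown here.

```rust
const SECTOR_SIZE: u64 = 512;

// Segment layout from the virtio-blk spec (struct virtio_blk_discard_write_zeroes).
#[repr(C)]
#[derive(Clone, Copy)]
struct DiscardWriteZeroesSeg {
    sector: u64,
    num_sectors: u32,
    flags: u32,
}

// Hypothetical placeholder for the backend call (e.g. imago's discard_to_any).
fn discard_range(offset: u64, len: u64) {
    println!("discard {} bytes at offset {}", len, offset);
}

fn handle_discard(segments: &[DiscardWriteZeroesSeg]) {
    for seg in segments {
        let offset = seg.sector * SECTOR_SIZE;
        let len = u64::from(seg.num_sectors) * SECTOR_SIZE;
        discard_range(offset, len);
    }
}

fn main() {
    handle_discard(&[DiscardWriteZeroesSeg { sector: 8, num_sectors: 16, flags: 0 }]);
}
```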
The F_WRITE_ZEROES feature enables the guest to instruct the host to
write zeroes to the disk image without actually having to copy any data
around. Since imago does the heavy lifting for us, we just need to
announce the feature and call the right method when needed.

Signed-off-by: Sergio Lopez <[email protected]>
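For reference, announcing the two features is just a matter of setting the corresponding bits in the device's available features. The bit positions (13 and 14) come from the virtio-blk spec; the surrounding function is an illustrative sketch, not the actual libkrun code.

```rust
// Feature bit positions defined by the virtio-blk spec.
const VIRTIO_BLK_F_DISCARD: u64 = 13;
const VIRTIO_BLK_F_WRITE_ZEROES: u64 = 14;

// Illustrative sketch of advertising both features to the guest.
fn avail_features() -> u64 {
    let mut features = 0u64;
    features |= 1 << VIRTIO_BLK_F_DISCARD;
    features |= 1 << VIRTIO_BLK_F_WRITE_ZEROES;
    features
}

fn main() {
    println!("avail_features: {:#x}", avail_features());
}
```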
@slp force-pushed the support-discard-zeroes branch from 95ff720 to 9149ac4 on November 27, 2025 at 14:56
@slp marked this pull request as ready for review on November 28, 2025 at 09:36
@tylerfanelli (Member) left a comment


You mention:

> Since, in practice, we won't have multiple threads accessing the
> lock, the overhead should be very small (one or two operations on
> atomics).

Curious as to why we can be sure that multiple threads won't access the lock? Is there a dedicated thread for this?

@mtjhrc (Collaborator) commented Dec 1, 2025

> Curious as to why we can be sure that multiple threads won't access the lock? Is there a dedicated thread for this?

I think that is accurate: there is a single dedicated worker thread for the device, plus the main event loop thread in libkrun, which is what ends up activating the device.

But still, I am not really happy about adding the Mutex, because I think it should be avoidable if you are clear about the actual ownership of the disk object at any given time. Having had a quick look, to do this the object should be moved into the worker thread in BlockWorker::new and returned from BlockWorker::work(), which would then be returned from .join() on the thread. That way you get the object back in Block::reset(), where you can store it and move it into the worker thread again on the next activate().

I'm not really worried about the performance here, but about the code quality if we were to keep adding mutexes to places where they shouldn't be necessary.
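A rough, self-contained sketch of the ownership hand-off being suggested here, using placeholder types rather than the actual libkrun code:

```rust
use std::thread::{self, JoinHandle};

// Hypothetical stand-in for the imago-backed disk image.
struct DiskImage;

struct BlockWorker {
    disk_image: DiskImage,
}

impl BlockWorker {
    fn new(disk_image: DiskImage) -> Self {
        BlockWorker { disk_image }
    }

    // The worker owns the image while running and hands it back when done,
    // so it gets `&mut` access without any locking.
    fn work(self) -> DiskImage {
        // ... process queue events here ...
        self.disk_image
    }
}

struct Block {
    disk_image: Option<DiskImage>,
    worker: Option<JoinHandle<DiskImage>>,
}

impl Block {
    fn activate(&mut self) {
        let image = self.disk_image.take().expect("disk image available");
        self.worker = Some(thread::spawn(move || BlockWorker::new(image).work()));
    }

    fn reset(&mut self) {
        if let Some(handle) = self.worker.take() {
            // join() returns the image, so it can be stored and moved into
            // the worker thread again on the next activate().
            self.disk_image = Some(handle.join().expect("worker thread panicked"));
        }
    }
}

fn main() {
    let mut block = Block { disk_image: Some(DiskImage), worker: None };
    block.activate();
    block.reset();
}
```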

